At the opening of the 11th Summer School of the Kwame Nkrumah University of Science and Technology (KNUST), themed “Artificial Intelligence in Education”, experts called for new ways of assessing learning outcomes in the era of artificial intelligence.
The Chief of Section for Technology and AI in Education at UNESCO, Dr. Shafika Isaacs, urged participants to develop evaluation methods that focus on analytical and critical thinking rather than on potentially AI-generated content.
“We need to find ways that invite the assessment of critical thinking, not just the assessment of an output that could have easily been written by an AI,” she said.
Dr. Isaacs warned of growing complacency in academic settings linked to overreliance on AI tools.
“Report after report is revealing the extent to which many students, university lecturers and professors are in fact engaged in academic laziness, which is the result of cognitive offloading,” she said.
She stressed the need for a systemic shift in curriculum design, teaching and assessment methods.
“This change is inviting us to rethink our pedagogical architectures. We have got to rethink how we define learning and teaching in the age of AI. How do we, very importantly, reframe and restructure our assessment systems?” she said.
The Founder and Chief Executive Officer of MinoHealth AI Labs and KaraAgro AI, Mr. Darlington Ahiale Akogo, highlighted the need for mutual growth within the education system, describing AI as a natural step in the evolution of learning.
“There definitely should be and would be an evolution of education. AI is the new step in that trajectory of evolution. It has taken it to a whole different level, but there has to be a two-way evolution,” he said.
He called on educators and students to nurture analytical reasoning and creativity alongside AI use.
“It is mainly about the mindset, you having the thinking capabilities so that when the AI system is doing it in the future, you can better supervise that AI. So there should be an evolution of both sides, then we will be fine,” he said.
Mr. Akogo also addressed growing concerns about AI’s potential misuse and loss of human oversight.
“If we just let AI systems develop to the point where we have no control over them or an actor can misuse them, then definitely we maximize the danger,” he said.
“Like the Vice-Chancellor said, we shouldn’t leave it to the AI systems to tell us how they should be used. We should influence that.”
He emphasised the importance of embedding local cultural values into AI systems to mitigate risks.
“When Americans build their AI systems with alignment, they embed their values. If we build our AI systems, what are the cultural values that we put in there to ensure that alignment? Those are how we minimize those dangers,” he said.
Mr. Akogo also discussed the use of external tools in AI implementation and the margin of error associated with generative AI models.
“When it comes to generative AI systems, they hallucinate. The way you reduce that margin of error is that you can always check the results that were produced by those external tools, and these are industry-established tools,” he said.
“That is the synergetic relationship that we should be having with AI when it comes to academia and research.”